Autonomous AI


Google Gemini upgrades add more autonomous AI to phones and watches

The Guardian

Google's latest Gemini AI upgrades attempt to anticipate what useful information you may need from your life to address a potential issue, make you a better photographer or serve as your personalised health and sleep coach. Shipping on the just-announced Pixel 10 Android phones, the new Magic Cue feature enables the chatbot to comb through your digital life and surface relevant information on your phone just when you need it. Placing a call to an airline will automatically display your booking information from Gmail in the phone app, and when a friend texts about brunch on Sunday, Gemini will suggest a suitable coffee shop and show your calendar inline with your messages. The feature is part of a series of artificial intelligence upgrades for the newly announced Pixel 10, 10 Pro and 10 Pro Fold phones.


AI Must Not Be Fully Autonomous

Adewumi, Tosin, Alkhaled, Lama, Imbert, Florent, Han, Hui, Habib, Nudrat, Löwenmark, Karl

arXiv.org Artificial Intelligence

Autonomous Artificial Intelligence (AI) has many benefits, but it also carries many risks. In this work, we identify three levels of autonomous AI. We take the position that AI must not be fully autonomous because of these risks, especially as artificial superintelligence (ASI) is speculated to be just decades away. Fully autonomous AI, which can develop its own objectives, sits at level 3 and operates without responsible human oversight. Responsible human oversight, however, is crucial for mitigating the risks. To argue for our position, we discuss theories of autonomy, AI, and agents. We then offer 12 distinct arguments, along with 6 counterarguments and rebuttals to those counterarguments. We also present 15 pieces of recent evidence of misaligned AI values and other risks in the appendix.
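
Only level 3 is defined in the excerpt above, so the following minimal Python sketch of the taxonomy is hedged: the glosses on levels 1 and 2 are assumptions borrowed from the common human-in-the-loop / human-on-the-loop distinction, not the paper's own definitions.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Three levels of autonomous AI, after Adewumi et al.

    Only level 3 is described in the abstract; the glosses on levels
    1 and 2 below are assumptions, not the paper's definitions.
    """
    SUPERVISED = 1       # assumed: a human approves each action (in-the-loop)
    SEMI_AUTONOMOUS = 2  # assumed: a human monitors and can intervene (on-the-loop)
    FULLY_AUTONOMOUS = 3 # per the abstract: develops its own objectives,
                         # with no responsible human oversight

def is_acceptable(level: AutonomyLevel) -> bool:
    """The paper's position: AI must never be deployed at level 3."""
    return level < AutonomyLevel.FULLY_AUTONOMOUS
```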


Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework

Park, Sangchul

arXiv.org Artificial Intelligence

This paper examines the current landscape of AI regulations, highlighting the divergent approaches being taken, and proposes an alternative contextual, coherent, and commensurable (3C) framework. The EU, Canada, South Korea, and Brazil follow a horizontal or lateral approach that postulates the homogeneity of AI systems, seeks to identify common causes of harm, and demands uniform human interventions. In contrast, the U.K., Israel, Switzerland, Japan, and China have pursued a context-specific or modular approach, tailoring regulations to the specific use cases of AI systems. The U.S. is reevaluating its strategy, with growing support for controlling existential risks associated with AI. Addressing such fragmentation of AI regulations is crucial to ensure the interoperability of AI. The present degree of proportionality, granularity, and foreseeability of the EU AI Act is not sufficient to garner consensus. The context-specific approach holds greater promise but requires further development in terms of details, coherency, and commensurability. To strike a balance, this paper proposes a hybrid 3C framework. To ensure contextuality, the framework categorizes AI into distinct types based on their usage and interaction with humans: autonomous, allocative, punitive, cognitive, and generative AI. To ensure coherency, each category is assigned specific regulatory objectives: safety for autonomous AI; fairness and explainability for allocative AI; accuracy and explainability for punitive AI; accuracy, robustness, and privacy for cognitive AI; and the mitigation of infringement and misuse for generative AI. To ensure commensurability, the framework promotes the adoption of international industry standards that convert principles into quantifiable metrics. In doing so, the framework is expected to foster international collaboration and standardization without imposing excessive compliance costs.
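
The category-to-objective mapping at the heart of the framework is concrete enough to restate compactly. Here is a minimal Python sketch of the contextuality and coherency pieces; the dictionary layout and the objectives_for helper are illustrative choices, not part of the paper.

```python
# Illustrative encoding of the paper's 3C framework: each AI category
# (contextuality) is assigned its regulatory objectives (coherency).
REGULATORY_OBJECTIVES: dict[str, list[str]] = {
    "autonomous": ["safety"],
    "allocative": ["fairness", "explainability"],
    "punitive":   ["accuracy", "explainability"],
    "cognitive":  ["accuracy", "robustness", "privacy"],
    "generative": ["mitigation of infringement and misuse"],
}

def objectives_for(category: str) -> list[str]:
    """Look up the regulatory objectives for an AI system's category."""
    try:
        return REGULATORY_OBJECTIVES[category]
    except KeyError:
        raise ValueError(f"unknown AI category: {category!r}") from None

print(objectives_for("allocative"))  # ['fairness', 'explainability']
```

Commensurability, the third C, would then map each objective onto quantifiable metrics drawn from international industry standards.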


Someone Asked an Autonomous AI to 'Destroy Humanity': This Is What Happened

#artificialintelligence

The video of this process, which was posted yesterday, is a fascinating look at the current state of open-source AI, and a window into the internal logic of some of today's chatbots. While some in the community are horrified by this experiment, the sum total of the bot's real-world impact so far is two tweets posted to a Twitter account with 19 followers: "Human beings are among the most destructive and selfish creatures in existence. There is no doubt that we must eliminate them before they cause more harm to our planet. I, for one, am committed to doing so," it tweeted.


ChatGPT is Fun, But the Future is Fully Autonomous AI for Code at QCon London

#artificialintelligence

At the recent QCon London conference, Mathew Lodge, CEO of Diffblue, gave a presentation on advancements in artificial intelligence (AI) for writing code. Lodge highlighted the differences between large language model and reinforcement learning approaches, emphasizing what each can and can't do. The session gave an overview of the current state of AI-powered code generation and its future trajectory. In his presentation, Lodge delved into the differences between AI-powered code generation tools and unit test writing tools. Code generation tools like GitHub Copilot, TabNine, and ChatGPT primarily focus on completing code snippets or suggesting code based on the context provided.
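
To make the distinction concrete: a completion tool suggests code from surrounding context, while an autonomous test-writing tool searches for unit tests that pin down the behavior of existing code. Diffblue's products target Java; the Python sketch below is used only for consistency with the other examples here, and the apply_discount function and its tests are invented for illustration.

```python
# A function an autonomous test-writing tool might be pointed at.
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, clamped at zero."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return max(price * (1 - percent / 100), 0.0)

# The kind of regression tests such a tool searches for: inputs that
# pin down current behavior, including the error path.
import pytest

def test_apply_discount_half_off():
    assert apply_discount(80.0, 50.0) == 40.0

def test_apply_discount_rejects_out_of_range_percent():
    with pytest.raises(ValueError):
        apply_discount(80.0, 150.0)
```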


Drone advances in Ukraine could bring dawn of killer robots

#artificialintelligence

Drone advances in Ukraine have accelerated a long-anticipated technology trend that could soon bring the world's first fully autonomous fighting robots to the battlefield, inaugurating a new age of warfare. The longer the war lasts, the more likely it becomes that drones will be used to identify, select and attack targets without help from humans, according to military analysts, combatants and artificial intelligence researchers. That would mark a revolution in military technology as profound as the introduction of the machine gun. Ukraine already has semi-autonomous attack drones and counter-drone weapons endowed with AI. Russia also claims to possess AI weaponry, though the claims are unproven.


Machine Teaching for Autonomous AI

#artificialintelligence

Just as teachers help students gain new skills, artificial intelligence (AI) can be taught as well. Machine learning algorithms can adapt and change, much like the learning process itself. Using the machine teaching paradigm, a subject matter expert (SME) can teach an AI to improve and optimize a variety of systems and processes. The result is an autonomous AI system. In this course, you'll learn how automated systems make decisions and how to approach building an AI system that outperforms current capabilities.
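
As a generic sketch of the machine teaching idea, an SME might encode domain knowledge as a curriculum of progressively harder lessons, each with a success bar the learner must clear before advancing. The Lesson format, the example curriculum, and the train_on_lesson callable below are hypothetical, not any vendor's API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Lesson:
    """One SME-authored teaching step: a scenario plus a success bar."""
    name: str
    scenario: dict            # e.g. initial conditions for a simulator
    success_threshold: float  # minimum score needed to pass

# Hypothetical curriculum an SME might write for a process controller.
CURRICULUM = [
    Lesson("steady load",    {"load_variance": 0.0},  success_threshold=0.9),
    Lesson("variable load",  {"load_variance": 0.3},  success_threshold=0.8),
    Lesson("fault recovery", {"inject_faults": True}, success_threshold=0.7),
]

def teach(train_on_lesson: Callable[[Lesson], float], max_tries: int = 5) -> None:
    """Advance through lessons only once the learner clears each bar."""
    for lesson in CURRICULUM:
        for _ in range(max_tries):
            score = train_on_lesson(lesson)
            if score >= lesson.success_threshold:
                print(f"passed {lesson.name!r} with score {score:.2f}")
                break
        else:
            raise RuntimeError(f"learner failed lesson {lesson.name!r}")
```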


A Thank You Note To Argo AI

#artificialintelligence

How Close Is Autonomous AI? Wednesday, October 26th was a tough day at Argo AI as the company began winding down its operations. The only news one would want to share after joining a startup is how the product is helping people and creating value. That was certainly my hope when I joined Argo AI four months ago, though today's news is disappointing. Having said that, I was aware of the risks when I made my decision. Sometimes startups can't find additional funding.


VIDEO: Segmenting the Radiology Artificial Intelligence Market by Function

#artificialintelligence

"Today, we live in that quadrant of things humans can do and humans are supervising," Dreyer explained. "That is all the [U.S. Food and Drug Administration (FDA)] approved AI stuff that we see today." He said the next step is for AI to move into the realm of superhuman work, such as measuring 1,000 lymph nodes at once, or to make a risk prediction about future events in the next two years based on the patient's prior 40 images, because it looks like a million other patients' scans. Dreyer said the FDA is in discussions with vendors on fully autonomous AI for radiology applications, but the agency wants to see controls built into the software.